INTERSPEECH.2023 - Analysis and Assessment

Total: 217

#1 Automatic assessments of dysarthric speech: the usability of acoustic-phonetic features

Authors: Loes van Bemmel ; Chiara Pesenti ; Xue Wei ; Helmer Strik

Individuals with dysarthria suffer from difficulties in speech production and consequent reductions in speech intelligibility, which is an important concept for diagnosing and assessing the effectiveness of speech therapy. In the current study, we investigate which acoustic-phonetic features are most relevant and important in automatically assessing intelligibility and in classifying speech as healthy or dysarthric. After feature selection, we applied a stepwise linear regression to predict intelligibility ratings and a Linear Discriminant Analysis to classify healthy and dysarthric speech. We observed a very strong correlation between actual and predicted intelligibility ratings in the regression analysis. We also observed a high classification accuracy of 98.06% using 17 features and a comparable, high accuracy of 96.11% with only two features. These results indicate the usefulness of acoustic-phonetic features in automatic assessments of dysarthric speech.
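
A minimal sketch of this kind of pipeline, assuming scikit-learn and using synthetic placeholder features rather than the acoustic-phonetic features from the paper; forward feature selection stands in for the stepwise regression procedure.

```python
# Hedged sketch: forward feature selection + linear regression for intelligibility,
# and LDA for healthy/dysarthric classification. All data below is synthetic.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 30))          # 120 speakers x 30 acoustic-phonetic features (toy)
y_intel = X[:, :3].sum(axis=1) + rng.normal(scale=0.3, size=120)   # intelligibility ratings
y_class = (y_intel < y_intel.mean()).astype(int)                    # 1 = dysarthric (toy label)

# Forward selection approximates a stepwise regression procedure.
selector = SequentialFeatureSelector(LinearRegression(), n_features_to_select=5, direction="forward")
X_sel = selector.fit(X, y_intel).transform(X)

reg = LinearRegression().fit(X_sel, y_intel)
print("R^2 of selected-feature regression:", reg.score(X_sel, y_intel))

lda = LinearDiscriminantAnalysis()
print("LDA healthy/dysarthric accuracy:", cross_val_score(lda, X_sel, y_class, cv=5).mean())
```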

#2 Classification of Multi-class Vowels and Fricatives From Patients Having Amyotrophic Lateral Sclerosis with Varied Levels of Dysarthria Severity

Authors: Chowdam Venkata Thirumala Kumar ; Tanuka Bhattacharjee ; Yamini Belur ; Atchayaram Nalini ; Ravi Yadav ; Prasanta Kumar Ghosh

Dysarthria due to Amyotrophic Lateral Sclerosis (ALS) progressively distorts the acoustic space, affecting the discriminability of different vowels and fricatives. However, the extent to which this happens with increasing severity has not been thoroughly investigated. In this work, we perform automatic 4-class vowel (/a/, /i/, /o/, /u/) and 3-class fricative (/s/, /sh/, /f/) classification at varied severity levels and compare the performance with that of manual classification (through listening tests). Experiments with speech data from 119 ALS and 40 healthy subjects suggest that manual and automatic classification accuracies decrease with increasing dysarthria severity, reaching 59.22% and 61.67% for vowels and 41.78% and 38.00% for fricatives, respectively, in the most severe cases. While manual classification is better than automatic classification for all severity levels except the highest-severity case for vowels, the difference between the two gradually decreases with increasing severity.

#3 Parameter-efficient Dysarthric Speech Recognition Using Adapter Fusion and Householder Transformation

Authors: Jinzi Qi ; Hugo Van hamme

In dysarthric speech recognition, data scarcity and the vast diversity between dysarthric speakers pose significant challenges. While finetuning has been a popular solution, it can lead to overfitting and low parameter efficiency. Adapter modules offer a better solution, with their small size and easy applicability. Additionally, Adapter Fusion can facilitate knowledge transfer from multiple learned adapters, but may employ more parameters. In this work, we apply Adapter Fusion for target speaker adaptation and speech recognition, achieving acceptable accuracy with significantly fewer speaker-specific trainable parameters than classical finetuning methods. We further improve the parameter efficiency of the fusion layer by reducing the size of query and key layers and using Householder transformation to reparameterize the value linear layer. Our proposed fusion layer achieves comparable recognition results to the original method with only one third of the parameters.
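
A minimal sketch of the Householder idea, assuming PyTorch: a value projection reparameterized as a single Householder reflection H = I - 2vv^T/||v||^2 stores d parameters instead of d^2. The paper's exact construction (e.g., products of several reflections inside the fusion layer) may differ.

```python
# Hedged sketch: a linear map parameterized as a Householder reflection.
import torch
import torch.nn as nn

class HouseholderLinear(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.v = nn.Parameter(torch.randn(dim))   # d parameters instead of d*d

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        v = self.v / self.v.norm()
        # x @ H with H = I - 2 v v^T reduces to x - 2 (x . v) v
        return x - 2.0 * (x @ v).unsqueeze(-1) * v

layer = HouseholderLinear(256)
out = layer(torch.randn(8, 10, 256))    # (batch, time, dim), shape preserved
print(out.shape, sum(p.numel() for p in layer.parameters()), "trainable parameters")
```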

#4 Few-shot Dysarthric Speech Recognition with Text-to-Speech Data Augmentation

Authors: Enno Hermann ; Mathew Magimai.-Doss

Speakers with dysarthria could particularly benefit from assistive speech technology, but are underserved by current automatic speech recognition (ASR) systems. The atypical characteristics of dysarthric speech pose challenges for ASR, while recording large amounts of training data can be exhausting for patients. In this paper, we synthesise dysarthric speech with a FastSpeech 2-based multi-speaker text-to-speech (TTS) system for ASR data augmentation. We evaluate its few-shot capability by generating dysarthric speech with as few as 5 words from an unseen target speaker and then using it to train speaker-dependent ASR systems. The results indicate that, while the TTS output is not yet of sufficient quality, this approach could allow easy development of personalised acoustic models for new dysarthric speakers and domains in the future.

#5 Latent Phrase Matching for Dysarthric Speech

Authors: Dianna Yee ; Colin Lea ; Jaya Narain ; Zifang Huang ; Lauren Tooley ; Jeffrey P. Bigham ; Leah Findlater

Many consumer speech recognition systems are not tuned for people with speech disabilities, resulting in poor recognition and user experience, especially for severe speech differences. Recent studies have emphasized interest in designing and improving personalized speech models for atypical speech. We propose a query-by-example-based personalized phrase recognition system that is trained using small amounts of speech, is language agnostic, does not assume a traditional pronunciation lexicon, and generalizes well across speech difference severities. On an internal dataset collected from 32 people with dysarthria, this approach works regardless of severity and shows a 60% improvement in recall relative to a commercial speech recognition system. On the public EasyCall dataset of dysarthric speech, our approach improves accuracy by 30.5%. Performance degrades as the number of phrases increases, but consistently outperforms ASR systems when trained with 50 unique phrases.
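
A minimal sketch of query-by-example phrase matching, assuming enrolled phrases are stored as latent embeddings and a query is matched by cosine similarity with an out-of-set rejection threshold. The random embedding function here is a placeholder for the learned speech representation.

```python
# Hedged sketch of query-by-example phrase recognition over latent templates.
import numpy as np

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)

rng = np.random.default_rng(0)
enrolled = {f"phrase_{i}": rng.normal(size=128) for i in range(50)}   # latent phrase templates

def recognize(query_embedding, templates, threshold=0.5):
    scores = {name: cosine(query_embedding, emb) for name, emb in templates.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] >= threshold else None    # reject out-of-set queries

query = enrolled["phrase_3"] + rng.normal(scale=0.1, size=128)   # noisy repetition of phrase_3
print(recognize(query, enrolled))
```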

#6 Speech Intelligibility Assessment of Dysarthric Speech by using Goodness of Pronunciation with Uncertainty Quantification

Authors: Eun Jung Yeo ; Kwanghee Choi ; Sunhee Kim ; Minhwa Chung

This paper proposes an improved Goodness of Pronunciation (GoP) measure that utilizes Uncertainty Quantification (UQ) for automatic speech intelligibility assessment of dysarthric speech. Current GoP methods rely heavily on overconfident neural-network predictions, which makes them unsuitable for assessing dysarthric speech due to its significant acoustic differences from healthy speech. To alleviate the problem, UQ techniques were applied to GoP by 1) normalizing the phoneme prediction (entropy, margin, maxlogit, logit-margin) and 2) modifying the scoring function (scaling, prior normalization). As a result, prior-normalized maxlogit GoP achieves the best performance, with a relative increase of 5.66%, 3.91%, and 23.65% over the baseline GoP for English, Korean, and Tamil, respectively. Furthermore, a phoneme analysis is conducted to identify which phoneme scores significantly correlate with intelligibility scores in each language.
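
A rough, hypothetical sketch of the flavour of these scores (posterior-based GoP, maxlogit, margin, entropy, and a prior-normalized variant), computed from a single frame-averaged acoustic-model output for one target phoneme on toy values. The exact formulations in the paper may differ; this is only an illustration of the quantities involved.

```python
# Hedged sketch of GoP-style confidence/uncertainty scores on toy logits.
import numpy as np

def gop_scores(logits, target, prior=None):
    """logits: (num_phonemes,) averaged over a phone segment's frames."""
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    baseline = np.log(probs[target])                       # classic posterior-based GoP
    maxlogit = logits[target] - logits.max()               # max-logit style confidence
    margin = probs[target] - np.delete(probs, target).max()
    entropy = -(probs * np.log(probs)).sum()               # uncertainty of the prediction
    if prior is not None:                                  # one possible prior-normalized variant
        maxlogit = (logits[target] - np.log(prior[target])) - (logits - np.log(prior)).max()
    return dict(baseline=baseline, maxlogit=maxlogit, margin=margin, entropy=entropy)

logits = np.array([2.0, 0.5, -1.0, 0.1])
prior = np.array([0.4, 0.3, 0.2, 0.1])
print(gop_scores(logits, target=1, prior=prior))
```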

#7 Robust Prototype Learning for Anomalous Sound Detection

Authors: Xiao-Min Zeng ; Yan Song ; Ian McLoughlin ; Lin Liu ; Li-Rong Dai

In this paper, we present a robust prototype learning framework for anomalous sound detection (ASD), in which a prototypical loss is exploited to measure the similarity between samples and prototypes. We show that existing generative and discriminative ASD methods can be unified into this framework from the perspective of prototypical learning. For ASD in recent DCASE challenges, extensions related to imbalanced learning are proposed to improve the robustness of prototypes learned from source and target domains. Specifically, balanced sampling and multiple-prototype expansion (MPE) strategies are proposed to address imbalances across attributes of the source and target domains. Furthermore, a novel negative-prototype expansion (NPE) method is used to construct pseudo-anomalies to learn a more compact and effective embedding space for normal sounds. Evaluation on the DCASE2022 Task 2 development dataset demonstrates the validity of the proposed prototype learning framework.
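
A minimal sketch of a prototypical loss, assuming PyTorch: prototypes are class-mean embeddings and each sample is scored by its (negative squared) distance to every prototype. The balanced sampling and MPE/NPE extensions from the abstract are not reproduced here.

```python
# Hedged sketch: prototypical loss over class-mean prototypes.
import torch
import torch.nn.functional as F

def prototypical_loss(embeddings, labels, num_classes):
    prototypes = torch.stack([embeddings[labels == c].mean(dim=0) for c in range(num_classes)])
    logits = -torch.cdist(embeddings, prototypes) ** 2     # similarity = -squared distance
    return F.cross_entropy(logits, labels)

emb = torch.randn(32, 64)            # embeddings of normal sounds from an encoder (toy)
lab = torch.arange(32) % 4           # machine/attribute ids as classes (toy)
print(prototypical_loss(emb, lab, num_classes=4))
```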

#8 A multimodal prototypical approach for unsupervised sound classification

Authors: Saksham Singh Kushwaha ; Magdalena Fuentes

In the context of environmental sound classification, the adaptability of systems is key: which sound classes are interesting depends on the context and the user's needs. Recent advances in text-to-audio retrieval allow for zero-shot audio classification, but performance compared to supervised models remains limited. This work proposes a multimodal prototypical approach that exploits local audio-text embeddings to provide more relevant answers to audio queries, augmenting the adaptability of sound detection in the wild. We do this by first using text to query a nearby community of audio embeddings that best characterize each query sound, selecting each group's centroid as a prototype. Second, we compare unseen audio to these prototypes for classification. We perform multiple ablation studies to understand the impact of the embedding models and prompts. Our unsupervised approach improves upon the zero-shot state of the art on three sound recognition benchmarks by an average of 12%.
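
A minimal sketch of the prototype construction, assuming a shared audio-text embedding space (e.g., CLAP-style encoders): a text query retrieves its k nearest audio embeddings, their centroid becomes the class prototype, and unseen audio is assigned to the nearest prototype. Embeddings here are random stand-ins.

```python
# Hedged sketch: text-queried audio centroids as class prototypes.
import numpy as np

rng = np.random.default_rng(0)
audio_bank = rng.normal(size=(1000, 512))                  # unlabeled audio embeddings (toy)
audio_bank /= np.linalg.norm(audio_bank, axis=1, keepdims=True)

def build_prototype(text_embedding, bank, k=16):
    sims = bank @ (text_embedding / np.linalg.norm(text_embedding))
    neighbors = bank[np.argsort(sims)[-k:]]                # local community of the text query
    centroid = neighbors.mean(axis=0)
    return centroid / np.linalg.norm(centroid)

class_texts = {c: rng.normal(size=512) for c in ["dog bark", "siren", "rain"]}
prototypes = {c: build_prototype(t, audio_bank) for c, t in class_texts.items()}

test_audio = rng.normal(size=512)
test_audio /= np.linalg.norm(test_audio)
print("predicted class:", max(prototypes, key=lambda c: prototypes[c] @ test_audio))
```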

#9 Robust Audio Anti-Spoofing with Fusion-Reconstruction Learning on Multi-Order Spectrograms

Authors: Penghui Wen ; Kun Hu ; Wenxi Yue ; Sen Zhang ; Wanlei Zhou ; Zhiyong Wang

Robust audio anti-spoofing has become increasingly challenging due to recent advancements in deepfake techniques. While spectrograms have demonstrated their capability for anti-spoofing, the complementary information presented in multi-order spectral patterns has not been well explored, which limits their effectiveness against varying spoofing attacks. Therefore, we propose a novel deep learning method with a spectral fusion-reconstruction strategy, namely S2pecNet, to utilise multi-order spectral patterns for robust audio anti-spoofing representations. Specifically, spectral patterns up to second order are fused in a coarse-to-fine manner, and two branches are designed for the fine-level fusion from the spectral and temporal contexts. A reconstruction from the fused representation back to the input spectrograms further reduces the potential information loss from fusion. Our method achieved state-of-the-art performance with an EER of 0.77% on the widely used ASVspoof2019 LA Challenge dataset.

#10 Adapting Language-Audio Models as Few-Shot Audio Learners

Authors: Jinhua Liang ; Xubo Liu ; Haohe Liu ; Huy Phan ; Emmanouil Benetos ; Mark D. Plumbley ; Wenwu Wang

Contrastive language-audio pretraining (CLAP) has become a new paradigm to learn audio concepts with audio-text pairs. CLAP models have shown unprecedented performance as zero-shot classifiers on downstream tasks. To further adapt CLAP with domain-specific knowledge, a popular method is to finetune its audio encoder with available labelled examples. However, this is challenging in low-shot scenarios, as the amount of annotations is limited compared to the model size. In this work, we introduce a Training-efficient (Treff) adapter to rapidly learn with a small set of examples while maintaining the capacity for zero-shot classification. First, we propose a cross-attention linear model (CALM) to map a set of labelled examples and test audio to test labels. Second, we find initialising CALM as a cosine measurement improves our Treff adapter even without training. The Treff adapter beats metric-based methods in few-shot settings and yields competitive results to fully-supervised methods.
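
A hypothetical sketch in the spirit of the cosine initialisation of CALM described above: test audio attends to the labelled support set via temperature-scaled cosine similarity and aggregates the support labels, which needs no training at all. CLAP embeddings are replaced by random placeholders, and the trained cross-attention variant is not shown.

```python
# Hedged sketch: training-free cosine-similarity aggregation over a few-shot support set.
import torch
import torch.nn.functional as F

def cosine_calm(test_emb, support_emb, support_labels, num_classes, temperature=0.07):
    test_emb = F.normalize(test_emb, dim=-1)
    support_emb = F.normalize(support_emb, dim=-1)
    attn = F.softmax(test_emb @ support_emb.T / temperature, dim=-1)   # cross-attention weights
    onehot = F.one_hot(support_labels, num_classes).float()
    return attn @ onehot                                               # soft class scores

support = torch.randn(5 * 4, 512)                  # 4-way 5-shot support embeddings (toy)
labels = torch.arange(4).repeat_interleave(5)
test = torch.randn(8, 512)
print(cosine_calm(test, support, labels, num_classes=4).argmax(dim=-1))
```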

#11 TFECN: Time-Frequency Enhanced ConvNet for Audio Classification

Authors: Mengwei Wang ; Zhe Yang

Recently, transformer-based models have shown leading performance in audio classification, gradually replacing the ConvNets that dominated in the past. However, some research has shown that certain characteristics and designs of transformers can be applied to other architectures and allow them to achieve similar performance. In this paper, we introduce TFECN, a pure ConvNet that incorporates transformer-style design and uses time-frequency enhanced convolution with large kernels. It provides a global receptive field along the frequency dimension and avoids the influence of the convolution's shift-equivariance on the recognition of non-shift-invariant patterns along the frequency axis. Furthermore, to use ImageNet-pretrained weights, we propose a method for transferring weights between kernels of different sizes. On the commonly used AudioSet, FSD50K, and ESC50 datasets, TFECN outperforms models trained in the same way.
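
A minimal sketch of a frequency-global convolution, assuming PyTorch: the kernel spans the entire mel-frequency axis so every output position sees all frequency bins, while the time axis keeps an ordinary local kernel. This only illustrates the receptive-field idea, not the full TFECN block design.

```python
# Hedged sketch: a convolution whose kernel covers the whole frequency axis.
import torch
import torch.nn as nn

n_mels = 128
freq_global_conv = nn.Conv2d(in_channels=1, out_channels=32,
                             kernel_size=(n_mels, 9), padding=(0, 4))

spec = torch.randn(4, 1, n_mels, 500)       # (batch, channel, frequency, time)
print(freq_global_conv(spec).shape)         # -> (4, 32, 1, 500): global over frequency
```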

#12 Resolution Consistency Training on Time-Frequency Domain for Semi-Supervised Sound Event Detection

Authors: Won-Gook Choi ; Joon-Hyuk Chang

The ability to exploit unlabeled data alongside supervised learning is of considerable relevance to polyphonic sound event detection (PSED) because of the high cost of frame-wise labeling. While semi-supervised learning (SSL) for image tasks has been extensively developed, SSL for PSED has not been substantially explored due to limitations in data augmentation. In this paper, we propose a novel SSL strategy for PSED called resolution consistency training (ResCT), which combines unsupervised terms with the mean teacher using different resolutions of a spectrogram for data augmentation. The proposed method regularizes the consistency between the model's predictions for different resolutions by controlling the sampling rate and window size. Experimental results show that ResCT outperforms other SSL methods on various evaluation metrics: event-F1 score, intersection-F1 score, and PSDS. Finally, we report ablation studies on the weak and strong augmentation policies.
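
A minimal sketch of the consistency idea under simplifying assumptions: the same waveform is converted to two spectrogram resolutions, the student is regularized to agree with an EMA mean teacher that sees the other resolution, and this unsupervised term would be added to the usual supervised loss. The classifier is a toy clip-level placeholder, not the SED model from the paper.

```python
# Hedged sketch: consistency between two spectrogram resolutions with a mean teacher.
import torch
import torch.nn as nn
import torchaudio

n_mels, n_classes = 64, 10
student = nn.Sequential(nn.Linear(n_mels, n_classes), nn.Sigmoid())
teacher = nn.Sequential(nn.Linear(n_mels, n_classes), nn.Sigmoid())
teacher.load_state_dict(student.state_dict())

mel_fine = torchaudio.transforms.MelSpectrogram(sample_rate=16000, n_fft=1024, hop_length=256, n_mels=n_mels)
mel_coarse = torchaudio.transforms.MelSpectrogram(sample_rate=16000, n_fft=2048, hop_length=512, n_mels=n_mels)

def clipwise(model, wave, mel):
    return model(mel(wave).clamp(min=1e-10).log().mean(dim=-1))   # average log-mel over time

wave = torch.randn(4, 16000)                       # batch of unlabeled 1-second clips (toy)
with torch.no_grad():
    target = clipwise(teacher, wave, mel_coarse)   # teacher sees the other resolution
consistency = nn.functional.mse_loss(clipwise(student, wave, mel_fine), target)
consistency.backward()

# EMA update of the mean teacher after each optimiser step.
with torch.no_grad():
    for tp, sp in zip(teacher.parameters(), student.parameters()):
        tp.mul_(0.999).add_(sp, alpha=0.001)
```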

#13 Fine-tuning Audio Spectrogram Transformer with Task-aware Adapters for Sound Event Detection

Authors: Kang Li ; Yan Song ; Ian McLoughlin ; Lin Liu ; Jin Li ; Li-Rong Dai

In this paper, we present a task-aware fine-tuning method to transfer the Patchout faSt Spectrogram Transformer (PaSST) model to the sound event detection (SED) task. Pretrained PaSST has shown significant performance on audio tagging (AT) and SED tasks, but fine-tuning the model from a single layer is suboptimal, as local and semantic information are not well exploited. To address this, we first introduce task-aware adapters, an SED-adapter and an AT-adapter, to fine-tune PaSST for the SED and AT tasks respectively, and then propose task-aware fine-tuning that combines local information from shallower layers with semantic information from deeper layers, based on these adapters. In addition, we propose the self-distillated mean teacher (SdMT) to train a robust student model with soft pseudo labels from the teacher. Experiments conducted on the DCASE2022 Task 4 development set achieve an EB-F1 of 64.85% and a PSDS1 of 0.5548, outperforming previous state-of-the-art systems.
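
A minimal sketch of the kind of bottleneck adapter referred to above, assuming PyTorch: a small down-projection/up-projection block with a residual connection, inserted alongside a frozen transformer sub-layer. The PaSST-specific SED/AT variants and the layer-combination scheme are not reproduced here.

```python
# Hedged sketch: a generic bottleneck adapter module.
import torch
import torch.nn as nn

class Adapter(nn.Module):
    def __init__(self, dim: int, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)
        self.up = nn.Linear(bottleneck, dim)
        self.act = nn.GELU()
        nn.init.zeros_(self.up.weight)           # start as an identity mapping
        nn.init.zeros_(self.up.bias)

    def forward(self, x):
        return x + self.up(self.act(self.down(x)))   # residual keeps the frozen path intact

x = torch.randn(2, 100, 768)                     # (batch, patches/frames, embed_dim)
print(Adapter(768)(x).shape)
```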

#14 Small Footprint Multi-channel Network for Keyword Spotting with Centroid Based Awareness

Authors: Dianwen Ng ; Yang Xiao ; Jia Qi Yip ; Zhao Yang ; Biao Tian ; Qiang Fu ; Eng Siong Chng ; Bin Ma

Spoken keyword spotting (KWS) in noisy far-field environments is challenging for small-footprint models, given the restrictions on computational resources (e.g., model size, running memory). This is even more intricate when handling noise from multiple microphones. To address this, we present a new multi-channel model that uses a CNN-based network with a linear mixing unit to achieve local-global dependency representations. Our method enhances noise robustness while ensuring more efficient computation. In addition, we propose an end-to-end centroid-based awareness module that provides class-similarity awareness at the bottleneck level to correct ambiguous cases during prediction. We conducted experiments using real noisy far-field data from the MISP Challenge 2021 and achieved state-of-the-art results compared to existing small-footprint KWS models. Our best score of 0.126 is highly competitive with larger models such as 3D-ResNet (0.122), while our model is much smaller (473K parameters versus 13M).

#15 Few-shot Class-incremental Audio Classification Using Adaptively-refined Prototypes

Authors: Wei Xie ; Yanxiong Li ; Qianhua He ; Wenchang Cao ; Tuomas Virtanen

New classes of sounds constantly emerge with a few samples, making it challenging for models to adapt to dynamic acoustic environments. This challenge motivates us to address the new problem of few-shot class-incremental audio classification. This study aims to enable a model to continuously recognize new classes of sounds with a few training samples of new classes while remembering the learned ones. To this end, we propose a method to generate discriminative prototypes and use them to expand the model's classifier for recognizing sounds of new and learned classes. The model is first trained with a random episodic training strategy, and then its backbone is used to generate the prototypes. A dynamic relation projection module refines the prototypes to enhance their discriminability. Results on two datasets (derived from the corpora of Nsynth and FSD-MIX-CLIPS) show that the proposed method exceeds three state-of-the-art methods in average accuracy and performance dropping rate.
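
A minimal sketch of prototype-based classifier expansion, assuming PyTorch: a frozen backbone embeds the few new-class samples, their normalised class means become new classifier weights, and these are appended to the existing weight matrix. The dynamic relation projection refinement from the abstract is omitted.

```python
# Hedged sketch: expanding a cosine classifier with prototypes of new sound classes.
import torch
import torch.nn.functional as F

embed_dim, old_classes = 128, 20
classifier_w = F.normalize(torch.randn(old_classes, embed_dim), dim=-1)   # learned base classes (toy)

def expand_classifier(classifier_w, new_embeddings, new_labels):
    protos = []
    for c in new_labels.unique(sorted=True):
        protos.append(F.normalize(new_embeddings[new_labels == c].mean(dim=0), dim=-1))
    return torch.cat([classifier_w, torch.stack(protos)], dim=0)

few_shot_emb = torch.randn(5 * 3, embed_dim)             # 3 new classes, 5 shots each (toy)
few_shot_lab = torch.arange(3).repeat_interleave(5)
expanded = expand_classifier(classifier_w, few_shot_emb, few_shot_lab)

query = F.normalize(torch.randn(4, embed_dim), dim=-1)
print((query @ expanded.T).argmax(dim=-1))               # cosine scores over old + new classes
```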

#16 Interpretable Latent Space Using Space-Filling Curves for Phonetic Analysis in Voice Conversion

Authors: Mohammad Hassan Vali ; Tom Bäckström

Vector quantized variational autoencoders (VQ-VAE) are well-known deep generative models, which map input data to a latent space that is used for data generation. Such latent spaces are unstructured and can thus be difficult to interpret. Some earlier approaches have introduced a structure to the latent space through supervised learning by defining data labels as latent variables. In contrast, we propose an unsupervised technique incorporating space-filling curves into vector quantization (VQ), which yields an arranged form of latent vectors such that adjacent elements in the VQ codebook refer to similar content. We applied this technique to the latent codebook vectors of a VQ-VAE, which encode the phonetic information of a speech signal in a voice conversion task. Our experiments show there is a clear arrangement in latent vectors representing speech phones, which clarifies what phone each latent vector corresponds to and facilitates other detailed interpretations of latent vectors.

#17 Topological Data Analysis for Speech Processing

Authors: Eduard Tulchinskii ; Kristian Kuznetsov ; Laida Kushnareva ; Daniil Cherniavskii ; Serguei Barannikov ; Irina Piontkovskaya ; Sergey Nikolenko ; Evgeny Burnaev

We apply topological data analysis (TDA) to speech classification problems and to the introspection of a pretrained speech model, HuBERT. To this end, we introduce a number of topological and algebraic features derived from Transformer attention maps and embeddings. We show that a simple linear classifier built on top of such features outperforms a fine-tuned classification head, achieving an improvement of about 9% accuracy and 5% ERR on two common datasets; on CREMA-D, the proposed feature set reaches a new state-of-the-art performance with an accuracy of 80.155%. We also show that topological features are able to reveal functional roles of speech Transformer heads; e.g., we find heads capable of distinguishing between pairs of sample sources (natural/synthetic) or voices without any downstream fine-tuning. Our results demonstrate that TDA is a promising new approach for speech analysis, especially for tasks that require structural prediction.

#18 Recycle-and-Distill: Universal Compression Strategy for Transformer-based Speech SSL Models with Attention Map Reusing and Masking Distillation

Authors: Kangwook Jang ; Sungnyun Kim ; Se-Young Yun ; Hoirin Kim

Transformer-based speech self-supervised learning (SSL) models, such as HuBERT, show surprising performance in various speech processing tasks. However, the huge number of parameters in speech SSL models necessitates compression to a more compact model for wider use in academia or small companies. In this study, we propose reusing attention maps across Transformer layers, removing key and query parameters while retaining the number of layers. Furthermore, we propose a novel masking distillation strategy to improve the student model's speech representation quality. We extend the distillation loss to utilize both masked and unmasked speech frames to fully leverage the teacher model's high-quality representation. Our universal compression strategy yields a student model that achieves a phoneme error rate (PER) of 7.72% and a word error rate (WER) of 9.96% on the SUPERB benchmark.
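
A minimal sketch of attention-map reuse, assuming PyTorch: a layer consumes an attention map computed by an earlier (donor) layer, so it keeps only value/output projections and drops its own query/key parameters. Dimensions are illustrative; the distillation part is not shown.

```python
# Hedged sketch: a transformer layer that reuses a donor layer's attention map.
import torch
import torch.nn as nn

class ReusedAttentionLayer(nn.Module):
    def __init__(self, dim: int, num_heads: int):
        super().__init__()
        self.num_heads, self.head_dim = num_heads, dim // num_heads
        self.value = nn.Linear(dim, dim)         # no query/key projections here
        self.out = nn.Linear(dim, dim)

    def forward(self, x, attn):                  # attn: (batch, heads, T, T) from a donor layer
        b, t, _ = x.shape
        v = self.value(x).view(b, t, self.num_heads, self.head_dim).transpose(1, 2)
        ctx = (attn @ v).transpose(1, 2).reshape(b, t, -1)
        return self.out(ctx)

x = torch.randn(2, 50, 768)
attn = torch.softmax(torch.randn(2, 12, 50, 50), dim=-1)
print(ReusedAttentionLayer(768, 12)(x, attn).shape)
```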

#19 Personalized Acoustic Scene Classification in Ultra-low Power Embedded Devices Using Privacy-preserving Data Augmentation

Authors: Timm Koppelmann ; Semih Agcaer ; Rainer Martin

In this work we present an adaptation method for personalized acoustic scene classification in ultra-low-power embedded devices (EDs). The computational limitations of EDs and the large variety of acoustic scenes may lead to poor performance of the embedded classifier in specific real-world user environments. We propose a semi-supervised scheme that estimates the audio feature distribution at the ED level and then samples this statistical model to generate artificial data points that emulate user-specific audio features. A second, cloud-based classifier then assigns pseudo labels to these samples, which are merged with existing labeled data to retrain the embedded classifier. The proposed method leads to significant performance improvements on user-specific data sets and requires neither a persistent connection to a cloud service nor the transmission of raw audio or audio features. It thus results in low data rates, high utility, and privacy preservation.
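
A minimal sketch of the adaptation loop under simplifying assumptions: fit a simple density model to the audio features observed on the device, sample artificial feature vectors from it, label them with a stronger "cloud" classifier, and retrain the small embedded classifier on the merged data. Models and data are toy placeholders.

```python
# Hedged sketch: sample artificial user-like features, pseudo-label, retrain.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
labeled_X, labeled_y = rng.normal(size=(200, 20)), rng.integers(0, 3, 200)   # factory training data (toy)
user_X = rng.normal(loc=0.5, size=(300, 20))                                  # unlabeled on-device features

cloud_clf = LogisticRegression(max_iter=1000).fit(labeled_X, labeled_y)       # stronger cloud model
embedded_clf = LogisticRegression(max_iter=1000)                              # small on-device model

density = GaussianMixture(n_components=4).fit(user_X)        # on-device feature distribution
synthetic_X, _ = density.sample(500)                          # emulated user-specific features
pseudo_y = cloud_clf.predict(synthetic_X)                     # pseudo labels from the cloud

X = np.vstack([labeled_X, synthetic_X])
y = np.concatenate([labeled_y, pseudo_y])
embedded_clf.fit(X, y)                                        # retrained personalized classifier
```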

#20 Background Domain Switch: A Novel Data Augmentation Technique for Robust Sound Event Detection

Authors: Wei-Cheng Lin ; Luca Bondi ; Shabnam Ghaffarzadegan

Data augmentation is a key component in achieving robust and generalizable performance in sound event detection (SED). A well-trained SED model should be able to resist the interference of non-target audio events and maintain a robust recognition rate under unknown and possibly mismatched testing conditions. In this study, we propose a novel background domain switch (BDS) data augmentation technique for SED. BDS utilizes a trained SED model on-the-fly to detect backgrounds in audio clips and switches them among data points to increase sample variability. This approach can be easily combined with other data augmentation techniques. We evaluate the effectiveness of BDS by applying it to several state-of-the-art SED frameworks, using both publicly available datasets and a synthesized mismatched test set. Experimental results systematically show that BDS obtains significant performance improvements across all evaluation aspects. The code is available at: https://github.com/boschresearch/soundseebackgrounddomainswitch
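
A minimal sketch of the BDS idea under assumptions: frame-level SED predictions mark frames where no target event is active as "background", and those background frames are swapped between two clips to increase variability. The SED model output is mocked with random probabilities, and the paper's exact segment-level switching may differ.

```python
# Hedged sketch: swapping background frames between two clips using SED posteriors.
import numpy as np

rng = np.random.default_rng(0)

def background_mask(event_probs, threshold=0.5):
    """event_probs: (frames, classes) frame-level posteriors from a trained SED model."""
    return event_probs.max(axis=1) < threshold               # True where no event is detected

def background_domain_switch(clip_a, clip_b, probs_a, probs_b):
    mask_a, mask_b = background_mask(probs_a), background_mask(probs_b)
    swapped_a, swapped_b = clip_a.copy(), clip_b.copy()
    n = min(mask_a.sum(), mask_b.sum())                       # swap an equal number of frames
    idx_a, idx_b = np.where(mask_a)[0][:n], np.where(mask_b)[0][:n]
    swapped_a[idx_a], swapped_b[idx_b] = clip_b[idx_b], clip_a[idx_a]
    return swapped_a, swapped_b

clip_a, clip_b = rng.normal(size=(500, 64)), rng.normal(size=(500, 64))    # log-mel frames (toy)
probs_a, probs_b = rng.beta(0.5, 5, size=(500, 10)), rng.beta(0.5, 5, size=(500, 10))
aug_a, aug_b = background_domain_switch(clip_a, clip_b, probs_a, probs_b)
```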

#21 Joint Prediction of Audio Event and Annoyance Rating in an Urban Soundscape by Hierarchical Graph Representation Learning

Authors: Yuanbo Hou ; Siyang Song ; Cheng Luo ; Andrew Mitchell ; Qiaoqiao Ren ; Weicheng Xie ; Jian Kang ; Wenwu Wang ; Dick Botteldooren

Sound events in daily life carry rich information about the objective world. The composition of these sounds affects the mood of people in a soundscape. Most previous approaches focus only on classifying and detecting audio events and scenes, but may ignore their perceptual quality, which can affect how humans experience the environment, e.g. annoyance. To this end, this paper proposes a novel hierarchical graph representation learning (HGRL) approach which links objective audio events (AE) with the subjective annoyance ratings (AR) of the soundscape perceived by humans. The hierarchical graph consists of fine-grained event (fAE) embeddings with single-class event semantics, coarse-grained event (cAE) embeddings with multi-class event semantics, and AR embeddings. Experiments show that the proposed HGRL successfully integrates AE with AR for audio event classification (AEC) and annoyance rating prediction (ARP) tasks, while coordinating the relations between cAE and fAE and further aligning the two grains of AE information with the AR.

#22 Anomalous Sound Detection Using Self-Attention-Based Frequency Pattern Analysis of Machine Sounds

Authors: Hejing Zhang ; Jian Guan ; Qiaoxi Zhu ; Feiyang Xiao ; Youde Liu

Different machines can exhibit diverse frequency patterns in their emitted sound. This characteristic has recently been explored in anomalous sound detection and has reached state-of-the-art performance. However, existing methods rely on manual or empirical determination of the frequency filter by observing the effective frequency range in the training data, which may be impractical for general application. This paper proposes an anomalous sound detection method using self-attention-based frequency pattern analysis and spectral-temporal information fusion. Our experiments demonstrate that the self-attention module automatically and adaptively analyses the effective frequencies of a machine sound and enhances that information in the spectral feature representation. With spectral-temporal information fusion, the resulting audio feature improves anomaly detection performance on the DCASE 2020 Challenge Task 2 dataset.

#23 Improving Audio-Text Retrieval via Hierarchical Cross-Modal Interaction and Auxiliary Captions

Authors: Yifei Xin ; Yuexian Zou

Most existing audio-text retrieval (ATR) methods focus on constructing contrastive pairs between whole audio clips and complete caption sentences, while ignoring fine-grained crossmodal relationships, e.g., short segments and phrases or frames and words. In this paper, we introduce a hierarchical crossmodal interaction (HCI) method for ATR by simultaneously exploring clip-sentence, segment-phrase, and frame-word relationships, achieving a comprehensive multi-modal semantic comparison. Besides, we also present a novel ATR framework that leverages auxiliary captions (AC) generated by a pretrained captioner to perform feature interaction between audio and generated captions, which yields enhanced audio representations and is complementary to the original ATR matching branch. The audio and generated captions can also form new audio-text pairs as data augmentation for training. Experiments show that our HCI significantly improves the ATR performance. Moreover, our AC framework also shows stable performance gains on multiple datasets.

#24 Differential Privacy enabled Dementia Classification: An Exploration of the Privacy-Accuracy Trade-off in Speech Signal Data

Authors: Suhas BN ; Sarah Rajtmajer ; Saeed Abdullah

Early detection of dementia is critical for effective symptom management. Recent studies have aimed to develop machine learning (ML) models to identify dementia onset and severity using language and speech features. However, existing methods can lead to serious privacy concerns due to sensitive data collected from a vulnerable population. In this work, we aim to establish the privacy-accuracy tradeoff benchmark for dementia classification models using audio and speech features. Specifically, we explore the effects of differential privacy (DP) on the training phase of the audio model. We then compare the classification accuracy of DP and non-DP models using a publicly available dataset. The resultant comparison provides useful insights to make informed decisions about the need for balancing privacy and accuracy tradeoff for dementia classification tasks. Our findings have implications for real-world deployment of ML models to support early detection and effective management of dementia.
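
A minimal sketch of DP-SGD, the standard way to add differential privacy to a training phase: per-example gradients are clipped and calibrated Gaussian noise is added before the optimiser step. The classifier and features are toy placeholders, not the dementia models or datasets studied in the paper, which may instead use a dedicated library such as Opacus.

```python
# Hedged sketch: one DP-SGD step with per-example clipping and Gaussian noise.
import torch
import torch.nn as nn

model = nn.Linear(40, 2)                       # e.g., acoustic features -> dementia / control (toy)
opt = torch.optim.SGD(model.parameters(), lr=0.1)
clip_norm, noise_multiplier = 1.0, 1.1

features, labels = torch.randn(32, 40), torch.randint(0, 2, (32,))
loss_fn = nn.CrossEntropyLoss()

summed = [torch.zeros_like(p) for p in model.parameters()]
for x, y in zip(features, labels):             # per-example gradients
    model.zero_grad()
    loss_fn(model(x.unsqueeze(0)), y.unsqueeze(0)).backward()
    grads = [p.grad.detach().clone() for p in model.parameters()]
    total = torch.sqrt(sum(g.pow(2).sum() for g in grads))
    scale = (clip_norm / (total + 1e-9)).clamp(max=1.0)     # clip each example's gradient
    for s, g in zip(summed, grads):
        s += g * scale

opt.zero_grad()
for p, s in zip(model.parameters(), summed):   # add calibrated Gaussian noise and average
    p.grad = (s + noise_multiplier * clip_norm * torch.randn_like(s)) / len(features)
opt.step()
```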

#25 Learning Emotional Representations from Imbalanced Speech Data for Speech Emotion Recognition and Emotional Text-to-Speech

Authors: Shijun Wang ; Jón Guðnason ; Damian Borth

Effective speech emotional representations play a key role in Speech Emotion Recognition (SER) and Emotional Text-To-Speech (TTS) tasks. However, emotional speech samples are more difficult and expensive to acquire compared with Neutral style speech, which causes one issue that most related works unfortunately neglect: imbalanced datasets. Models might overfit to the majority Neutral class and fail to produce robust and effective emotional representations. In this paper, we propose an Emotion Extractor to address this issue. We use augmentation approaches to train the model and enable it to extract effective and generalizable emotional representations from imbalanced datasets. Our empirical results show that (1) for the SER task, the proposed Emotion Extractor surpasses the state-of-the-art baseline on three imbalanced datasets; (2) the produced representations from our Emotion Extractor benefit the TTS model, and enable it to synthesize more expressive speech.